While many systems have been developed to train Graph Neural Networks (GNNs), efficient model inference and evaluation remain to be addressed. For instance, using the widely adopted node-wise approach, model evaluation can account for up to 94% of the time in the end-to-end training process due to neighbor explosion, whereby a node must gather all of its multi-hop neighbors for computation. On the other hand, layer-wise inference avoids the neighbor explosion problem by conducting inference layer by layer, so that in each layer the nodes only need their one-hop neighbors. However, implementing layer-wise inference requires substantial engineering effort because users need to manually decompose a GNN model into layers for computation and split the workload into batches to fit into device memory. In this paper, we develop Deep Graph Inference (DGI) -- a system for easy and efficient GNN model inference, which automatically translates the training code of a GNN model for layer-wise execution. DGI is general across various GNN models and different kinds of inference requests, and supports out-of-core execution on large graphs that cannot fit in CPU memory. Experimental results show that DGI consistently outperforms node-wise inference across different datasets and hardware settings, and the speedup can be over 1,000x.
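As a concrete illustration of the idea, here is a minimal sketch of layer-wise inference for a hypothetical mean-aggregation GNN (not DGI's actual implementation): each layer's embeddings are materialized for all nodes, in memory-sized batches, before the next layer starts, so every node only ever touches its one-hop neighbors.

```python
import numpy as np

def layer_wise_inference(adj, feats, weights, batch_size=2):
    """Layer-wise GNN inference: fully materialize every node's embedding
    for layer l before moving to layer l+1, so each step only needs
    one-hop neighbors (no neighbor explosion)."""
    h = feats
    for W in weights:                        # one full pass per GNN layer
        out = np.empty((h.shape[0], W.shape[1]))
        for start in range(0, h.shape[0], batch_size):  # batch to fit device memory
            for v in range(start, min(start + batch_size, h.shape[0])):
                nbrs = np.nonzero(adj[v])[0]            # one-hop neighbors only
                agg = h[nbrs].mean(axis=0) if len(nbrs) else np.zeros(h.shape[1])
                out[v] = np.maximum(agg @ W, 0.0)       # mean-aggregate, project, ReLU
        h = out
    return h
```

Node-wise inference would instead recursively expand each target node's L-hop neighborhood, recomputing shared intermediate embeddings many times; the layer-by-layer schedule above computes each intermediate embedding exactly once.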
Recent advances in vision transformers (ViTs) have achieved outstanding performance on visual recognition tasks. Convolutional neural networks (CNNs) exploit spatial inductive bias to learn visual representations, but these networks are spatially local. ViTs can learn global representations through their self-attention mechanism, but they are usually heavy-weight and unsuitable for mobile devices. In this paper, we propose cross feature attention (XFA) to reduce the computational cost of transformers, and combine it with efficient mobile CNNs to form a novel, efficient, lightweight CNN-ViT hybrid model, XFormer, which can serve as a general-purpose backbone to learn both global and local representations. Experimental results show that XFormer outperforms numerous CNN- and ViT-based models across different tasks and datasets. On the ImageNet-1K dataset, XFormer achieves 78.5% top-1 accuracy with 5.5 million parameters, which is 2.2% and 6.3% more accurate than EfficientNet-B0 (CNN-based) and DeiT (ViT-based), respectively, at a similar parameter budget. Our model also performs well when transferred to object detection and semantic segmentation tasks. On the MS COCO dataset, XFormer improves by 10.5 AP (22.7 -> 33.2 AP) in the YOLOv3 framework with only 6.3 million parameters and 3.8G FLOPs. On the Cityscapes dataset, with only a simple all-MLP decoder, XFormer achieves a mIoU of 78.5 at 15.3 FPS, surpassing state-of-the-art lightweight segmentation networks.
Conformal prediction is a simple yet powerful tool for quantifying uncertainty without any distributional assumptions. However, existing methods can only provide an average coverage guarantee, which is less desirable than the stronger conditional coverage guarantee. Although achieving exact conditional coverage is impossible, approximating conditional coverage remains an important research direction. In this paper, we propose a modified nonconformity score by leveraging a local approximation of the conditional distribution. The modified score inherits the spirit of split conformal methods, which are simple and efficient compared with full conformal methods, but better approximates the conditional coverage guarantee. Empirical results on various datasets, including high-dimensional age regression on images, show that our method provides tighter intervals than existing methods.
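For context, the split conformal baseline that the modified score builds on can be sketched as follows (a minimal illustration using the standard absolute-residual score on synthetic data; the paper's locally adaptive score is not shown):

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Standard split conformal regression: absolute-residual scores on a
    held-out calibration set give a quantile that widens test predictions
    into intervals with marginal (average) coverage >= 1 - alpha."""
    scores = np.abs(cal_y - cal_pred)                 # nonconformity scores
    n = len(scores)
    # finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    return test_pred - q, test_pred + q               # per-point [lower, upper]
```

The guarantee this construction gives is only *marginal*: the same half-width q is used everywhere, which is exactly why conditionally adaptive scores like the one proposed in the paper are of interest.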
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both modalities of images have prominent biomarkers that indicate suspected glaucoma. Clinically, it is often recommended to take both screenings for a more accurate and reliable diagnosis. However, although numerous computer-aided diagnosis algorithms have been proposed based on fundus images or OCT volumes, few methods leverage both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) that we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus \& OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus color photography and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework is established to assess the performance of the submitted methods. During the challenge, 1,272 results were submitted, and the top 10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all of these teams submitted their source code in the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules they proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus \& OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
In recent years, reinforcement learning (RL) has attracted increasing interest due to its great success in various applications. However, standard RL algorithms can only be applied to a single reward function and cannot adapt quickly to unseen reward functions. In this paper, we advocate a general operator view of reinforcement learning, which enables us to directly estimate the operator that maps from reward functions to value functions. The benefit of learning the operator is that we can take any new reward function as input and attain its corresponding value function in a zero-shot manner. To approximate this special type of operator, we design a number of novel operator neural network architectures based on its theoretical properties. Our operator network designs outperform existing methods and the standard design of general-purpose operator networks, and we demonstrate the benefits of our operator deep Q-learning framework on several tasks, including reward transfer for offline policy evaluation (OPE) and reward transfer for offline policy optimization across a range of tasks.
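To make the zero-shot claim concrete: in the tabular, fixed-policy case the reward-to-value map is an explicit linear operator, V = (I - gamma * P)^(-1) r, so the value of any new reward follows from one matrix product. The paper's contribution is approximating this kind of map with operator networks when no closed form is available; the sketch below only shows the closed-form tabular case.

```python
import numpy as np

def value_operator(P, gamma):
    """For a fixed policy with state-transition matrix P, policy evaluation
    is a linear operator on the reward vector: V = (I - gamma P)^{-1} r.
    Computing this operator once gives the value of any reward zero-shot."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

# toy 2-state chain under some fixed policy
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
M = value_operator(P, gamma=0.95)

r_new = np.array([1.0, 0.0])   # a reward function never seen before
V = M @ r_new                  # its value function, with no re-training
```

The Bellman fixed-point equation V = r + gamma * P V holds for the result by construction, which is the property a learned operator network would be trained to preserve across reward functions.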
Heterogeneous graph neural networks (HGNNs) have been blossoming in recent years, but the unique data processing and evaluation setups used by each work obstruct a full understanding of their advances. In this work, we present a systematic reproduction of 12 recent HGNNs using their official code, datasets, settings, and hyperparameters, revealing surprising findings about the progress of HGNNs. We find that simple homogeneous GNNs, e.g., GCN and GAT, are largely underestimated due to improper settings. GAT with proper inputs can generally match or outperform all existing HGNNs across various scenarios. To facilitate robust and reproducible HGNN research, we construct the Heterogeneous Graph Benchmark (HGB), consisting of 11 diverse datasets with three tasks. HGB standardizes the process of heterogeneous graph data splitting, feature processing, and performance evaluation. Finally, we introduce a simple but very strong baseline, Simple-HGN, which significantly outperforms all previous models on HGB, to accelerate future progress on HGNNs.
Contemporary visual captioning models frequently hallucinate objects that are not actually in the scene, due to visual misclassification or over-reliance on priors, resulting in semantic inconsistency between the visual information and the generated lexical words. The most common remedy is to encourage the captioning model to dynamically link generated object words or phrases to appropriate regions of the image, i.e., grounded image captioning (GIC). However, GIC relies on an auxiliary task (grounding objects) that does not address the key issue behind object hallucination, namely the semantic inconsistency. In this paper, we take a novel perspective on the problem above: exploiting the semantic coherency between the visual and language modalities. Specifically, we propose a Consensus Graph Representation Learning framework (CGRL) for GIC that incorporates a consensus representation into the grounded captioning pipeline. The consensus is learned by aligning the visual graph (e.g., a scene graph) to the language graph, considering both nodes and edges of the graphs. With the aligned consensus, the captioning model can capture both correct linguistic characteristics and visual relevance, and then further ground appropriate image regions. We validate the effectiveness of our model, with a significant decline in object hallucination (-9% CHAIRi) on the Flickr30k Entities dataset. In addition, our CGRL is evaluated with multiple automatic metrics and by human evaluation, and the results show that the proposed approach can simultaneously improve the performance of image captioning (+2.9 CIDEr) and grounding (+2.3 F1LOC).
Recent learning-based inpainting algorithms have achieved compelling results for completing missing regions after removing undesired objects from videos. To maintain temporal consistency among frames, 3D spatial and temporal operations are often used in deep networks. However, such methods usually suffer from memory constraints and can only handle low-resolution videos. We propose a novel spatial-temporal residual aggregation framework for high-resolution video inpainting. The key idea is to first learn and apply spatial and temporal inpainting networks on downsampled, low-resolution videos. We then refine the low-resolution results by aggregating the learned spatial and temporal image residuals (details) onto the upsampled inpainted frames. Both quantitative and qualitative evaluations show that we can produce more temporally coherent and visually appealing results than state-of-the-art methods for inpainting high-resolution videos.
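A simplified, spatial-only sketch of the residual-aggregation idea (the actual method also aggregates residuals temporally across neighboring frames, which is not shown) might look like:

```python
import numpy as np

def downsample(x, s=2):
    """Average-pool a 2D frame by factor s."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(x, s=2):
    """Nearest-neighbor upsample a 2D frame by factor s."""
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def aggregate_residual(hr_frame, lr_inpainted, hole_mask, s=2):
    """Refine an upsampled low-resolution inpainting result by adding back
    the high-frequency residual (detail) available in the known regions.
    hole_mask == 1 marks unknown (removed) pixels."""
    # details lost by the downsample/upsample round trip
    residual = hr_frame - upsample(downsample(hr_frame, s), s)
    coarse = upsample(lr_inpainted, s)            # coarse low-res completion
    return coarse + residual * (1 - hole_mask)    # add details only where known
```

Because the expensive inpainting networks only ever see the downsampled video, memory usage is bounded by the low-resolution frames, while the residual term restores the high-resolution detail outside the hole.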
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To address this data hunger, we construct a dynamic spontaneous ME dataset, the largest in current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.